Ideally, a robot should move in a manner that maximizes the knowledge gained about the state of both its internal system and its external operating environment. Trajectory design is a challenging problem that has been approached from a variety of perspectives, from information-theoretic analyses to learning-based methods. Recently, observability-based metrics have been proposed to find trajectories that enable rapid and accurate state and parameter estimation. The viability and efficacy of these methods are not yet well understood in the literature. In this paper, we compare two state-of-the-art methods for observability-aware trajectory optimization and seek to add important theoretical clarifications and valuable discussion about their overall effectiveness. For our evaluation, we examine the representative task of sensor-to-sensor extrinsic self-calibration using a realistic physics simulator. We also study the sensitivity of these algorithms to changes in the information content of the exteroceptive sensor measurements.
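As context for the kind of metric involved, a common choice in observability-aware trajectory optimization is a scalar measure of the local observability Gramian; the following is a general sketch in our own notation, not necessarily the exact criterion used by the two methods compared here.

```latex
% Local observability Gramian over [t_0, t_f], where \Phi(t, t_0) is the state
% transition matrix and H(t) is the measurement Jacobian along the trajectory.
\mathcal{G}_o(t_0, t_f) = \int_{t_0}^{t_f} \Phi(t, t_0)^{\top} H(t)^{\top} H(t)\, \Phi(t, t_0)\, \mathrm{d}t
```

Trajectory optimization then seeks control inputs that maximize, for example, the smallest eigenvalue of the Gramian, so that all states and parameters are well excited by the motion.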
Reliable operation in inclement weather is essential to the deployment of safe autonomous vehicles (AVs). Robustness and reliability can be achieved by fusing data from the standard AV sensor suite (i.e., lidars, cameras) with weather-robust sensors, such as millimetre-wave radars. Critically, accurate sensor data fusion requires knowledge of the rigid-body transform between pairs of sensors, which can be determined through the process of extrinsic calibration. Many extrinsic calibration algorithms have been designed for 2D (planar) radar sensors; however, recently-developed, low-cost 3D millimetre-wave radars are set to supplant their 2D counterparts in many applications. In this paper, we propose a continuous-time 3D radar-camera extrinsic calibration algorithm that leverages radar velocity measurements and, unlike the majority of existing techniques, does not require specialized radar retroreflectors to be present in the environment. We derive the observability properties of our formulation and demonstrate the efficacy of our algorithm through synthetic and real-world experiments.
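To make the role of the radar velocity measurements concrete, here is a rough sketch of the Doppler constraint for a single static target, in our own notation (the paper's continuous-time formulation may differ):

```latex
% u_i: unit direction to static target i in the radar frame; v_{r,i}: measured
% radial (Doppler) speed; v_c, w_c: camera linear and angular velocity in the
% camera frame; (R, p): unknown rotation and translation of the radar frame
% relative to the camera frame.
v_{r,i} = -\,\mathbf{u}_i^{\top}\, R^{\top} \left( \mathbf{v}_c + \boldsymbol{\omega}_c \times \mathbf{p} \right)
```

Each static target thus contributes one scalar equation coupling the camera egomotion to the unknown extrinsic parameters, with no retroreflectors required.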
Data from two sensors cannot be fused correctly without an accurate estimate of their relative pose, which can be determined through the process of extrinsic calibration. When two or more sensors are capable of producing their own egomotion estimates (i.e., measurements of their trajectories through the environment), the 'hand-eye' formulation of extrinsic calibration can be employed. In this paper, we extend recent work on a convex optimization approach for hand-eye calibration to the case where one sensor cannot observe the scale of its translational motion (e.g., a monocular camera observing an unmapped environment). We prove that our technique is able to provide certifiably globally optimal solutions to both the known-scale and unknown-scale variants of hand-eye calibration, provided that the measurement noise is bounded. Here, we focus on the theoretical aspects of the problem, demonstrating the tightness and stability of our solution, and show the optimality and speed of our algorithm through experiments with synthetic data.
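For reference, a sketch of the underlying formulation in standard notation (assumed here, rather than quoted from the paper): hand-eye calibration seeks the extrinsic transform X relating two sensors from their respective relative motions A_i and B_i, via A_i X = X B_i.

```latex
% Rotation and translation components of A_i X = X B_i, with the unknown scale
% \alpha applied to the monocular sensor's (unit-scale) translation estimates:
R_{A_i} R_X = R_X R_{B_i}, \qquad
R_{A_i}\,\mathbf{t}_X + \mathbf{t}_{A_i} = \alpha\, R_X\,\mathbf{t}_{B_i} + \mathbf{t}_X
```

The unknown-scale variant adds the scalar alpha as a decision variable, so the convex relaxation must certify optimality over the rotation, translation, and scale jointly.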
Machine learning (ML) research has generally focused on models, while the most prominent datasets have been employed for everyday ML tasks without regard for the breadth, difficulty, and faithfulness of those datasets to the underlying problem. Neglecting the fundamental importance of datasets has caused significant problems, involving data cascades in real-world applications and saturation of dataset-driven criteria for model quality, and has hindered the growth of research. To address this issue, we present DataPerf, a benchmark package for evaluating ML datasets and dataset-working algorithms. We intend to enable a 'data ratchet', in which training sets will aid in evaluating test sets on the same problems, and vice versa. This feedback-driven strategy will generate a virtuous cycle that will accelerate data-centric AI. The MLCommons Association will maintain DataPerf.
Open procedures represent the dominant form of surgery worldwide. Artificial intelligence (AI) has the potential to optimize surgical practice and improve patient outcomes, but efforts have focused primarily on minimally invasive techniques. Our work overcomes existing data limitations for training AI models by curating, from YouTube, the largest dataset of open surgical videos to date: 1997 videos of 23 surgical procedures uploaded from 50 countries. Using this dataset, we developed a multi-task AI model capable of real-time understanding of surgical behaviours, hands, and tools, the building blocks of procedural flow and surgeon skill. We show that our model generalizes across diverse surgery types and environments. To illustrate this generalizability, we directly applied our YouTube-trained model to analyze open surgeries prospectively collected at an academic medical center and identified kinematic descriptors of surgical skill related to hand efficiency. Our annotated video dataset of open surgery (AVOS) and trained model will be made available for further development of surgical AI.
The detection of anomalies in time series data is crucial in a wide range of applications, such as system monitoring, health care, or cyber security. While the vast number of available methods makes selecting the right method for a given application hard enough, different methods also have different strengths, e.g. regarding the types of anomalies they are able to find. In this work, we compare six unsupervised anomaly detection methods of varying complexity to answer two questions: do the more complex methods usually perform better? And are there specific anomaly types that those methods are tailored to? The comparison is carried out on the UCR anomaly archive, a recent benchmark dataset for anomaly detection. We compare the six methods by analyzing the experimental results at the dataset and anomaly-type level after tuning the necessary hyperparameters for each method. Additionally, we examine the ability of individual methods to incorporate prior knowledge about the anomalies and analyse the differences between point-wise and sequence-wise features. We show with broad experiments that the classical machine learning methods outperform the deep learning methods across a wide range of anomaly types.
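For illustration, a minimal sketch of one classical approach of the kind the comparison favours, namely an isolation forest over sliding windows of the series (our own toy example, not the paper's code; the window length and injected anomaly are made up):

```python
# Sliding-window anomaly scoring with a classical method (Isolation Forest).
import numpy as np
from sklearn.ensemble import IsolationForest

def window_scores(series: np.ndarray, window: int = 64) -> np.ndarray:
    """Score each subsequence of `series`; higher scores mean more anomalous."""
    # Overlapping windows expose sequence-wise structure to the model,
    # rather than treating each point in isolation.
    windows = np.lib.stride_tricks.sliding_window_view(series, window)
    model = IsolationForest(n_estimators=100, random_state=0).fit(windows)
    # score_samples returns higher values for inliers, so negate.
    return -model.score_samples(windows)

rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 60, 3000)) + 0.05 * rng.standard_normal(3000)
ts[1500:1530] += 2.0  # injected level-shift anomaly
scores = window_scores(ts)
print("most anomalous window starts near index", int(scores.argmax()))
```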
Recently, many causal estimators for the Conditional Average Treatment Effect (CATE) and instrumental variable (IV) problems have been published and open-sourced, allowing estimation of the granular impact of both randomized treatments (such as A/B tests) and of user choices on the outcomes of interest. However, the practical application of such models has been hampered by the lack of a valid way to score their performance out of sample in order to select the best one for a given application. We address that gap by proposing novel scoring approaches for both the CATE case and an important subset of instrumental variable problems, namely those where the instrumental variable is customer access to a product feature and the treatment is the customer's choice to use that feature. Being able to score model performance out of sample allows us to apply hyperparameter optimization methods to causal model selection and tuning. We implement this in an open-source package that relies on the DoWhy and EconML libraries for implementations of causal inference models (and also includes a Transformed Outcome model implementation), and on FLAML for hyperparameter optimization and for the component models used in the causal models. We demonstrate on synthetic data that optimizing the proposed scores is a reliable method for choosing the model and its hyperparameter values, whose estimates are close to the true impact, in both the randomized CATE and IV cases. Further, we provide examples of applying these methods to real customer data from Wise.
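As a hedged sketch of the idea behind out-of-sample CATE scoring, here is the standard transformed-outcome construction, which gives an unbiased per-row target for the treatment effect under a randomized treatment (our own illustration; the package's actual scorers may differ):

```python
# Out-of-sample CATE scoring via the transformed outcome Y* = Y(T-p)/(p(1-p)),
# which satisfies E[Y* | X] = CATE(X) when treatment T is randomized with
# propensity p, so MSE against Y* can rank candidate CATE models.
import numpy as np

def transformed_outcome(y: np.ndarray, t: np.ndarray, p: float) -> np.ndarray:
    return y * (t - p) / (p * (1.0 - p))

def transformed_outcome_mse(cate_pred, y, t, p):
    """Lower is better: compare CATE predictions against Y* on held-out data."""
    y_star = transformed_outcome(y, t, p)
    return float(np.mean((cate_pred - y_star) ** 2))

# Toy usage on synthetic data with true CATE = 1 + x: the true effect scores
# better (lower) than a deliberately wrong constant-effect candidate.
rng = np.random.default_rng(0)
x = rng.uniform(size=50000)
t = rng.binomial(1, 0.5, size=50000)
y = 2.0 * x + (1.0 + x) * t + rng.standard_normal(50000)
print(transformed_outcome_mse(1.0 + x, y, t, p=0.5),
      transformed_outcome_mse(np.full_like(x, 3.0), y, t, p=0.5))
```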
In this paper, we present a novel and effective framework, named 4K-NeRF, to pursue high-fidelity view synthesis in the challenging scenario of ultra-high resolutions, building on the methodology of neural radiance fields (NeRF). The rendering procedure of NeRF-based methods typically operates in a pixel-wise manner, in which rays (or pixels) are treated independently during both the training and inference phases, limiting their representational ability to describe subtle details, especially when lifting to extremely high resolutions. We address this issue by better exploiting ray correlation to enhance high-frequency details, benefiting from the use of geometry-aware local context. In particular, we use a view-consistent encoder to model geometric information effectively in a lower-resolution space and recover fine details through a view-consistent decoder, conditioned on ray features and depths estimated by the encoder. Joint training with patch-based sampling further allows our method to incorporate supervision from perception-oriented regularization beyond a pixel-wise loss. Quantitative and qualitative comparisons with modern NeRF methods demonstrate that our method can significantly boost rendering quality while retaining high-frequency details, achieving state-of-the-art visual quality in the 4K ultra-high-resolution scenario. Code is available at \url{https://github.com/frozoul/4K-NeRF}
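As a loose illustration of the patch-based sampling mentioned above (our own simplification, not the released code): sampling rays as contiguous patches, rather than independently per pixel, is what makes patch-level perceptual supervision possible.

```python
# Hypothetical sketch: sample a contiguous patch of ray (pixel) coordinates,
# so a perceptual loss can be applied to the rendered patch as a whole.
import numpy as np

def sample_ray_patch(height: int, width: int, patch: int = 32,
                     rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """Return (patch*patch, 2) pixel coordinates of one random patch of rays."""
    top = int(rng.integers(0, height - patch + 1))
    left = int(rng.integers(0, width - patch + 1))
    ys, xs = np.mgrid[top:top + patch, left:left + patch]
    return np.stack([ys.ravel(), xs.ravel()], axis=-1)

coords = sample_ray_patch(2160, 3840)  # one 32x32 ray patch from a 4K frame
print(coords.shape)  # (1024, 2)
```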
In collective decision-making, designing algorithms that use only local information to effect swarm-level behaviour is a non-trivial problem. We used machine learning techniques to teach swarm members to map their local perceptions of the environment to an optimal action. A curriculum inspired by Machine Education approaches was designed to facilitate this learning process and teach the members the skills required for optimal performance in the collective perception problem. We extended previous approaches by creating a curriculum that taught agents resilience to malicious influence. The experimental results show that well-designed rules-based algorithms can produce effective agents. When performing opinion fusion, we implemented decentralised resilience by having agents dynamically weight received opinions. We found a non-significant difference between constant and dynamic weights, suggesting that momentum-based opinion fusion is perhaps already a resilience mechanism.
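A minimal sketch of momentum-based, dynamically-weighted opinion fusion of the kind described above (our own simplified formulation; the paper's update rule may differ):

```python
# One agent's opinion update: blend its own opinion with neighbours' opinions.
# With dynamic weights, opinions far from the agent's own estimate (possibly
# from malicious members) are down-weighted; the momentum term keeps the
# self-weight high, which itself damps outside influence.
import numpy as np

def fuse_opinions(own: float, received: np.ndarray, momentum: float = 0.9,
                  dynamic: bool = True) -> float:
    if dynamic:
        w = 1.0 / (1.0 + np.abs(received - own))  # inverse-distance weights
        w = w / w.sum()
    else:
        w = np.full(len(received), 1.0 / len(received))  # constant weights
    return momentum * own + (1.0 - momentum) * float(w @ received)

print(fuse_opinions(0.6, np.array([0.55, 0.62, 0.95])))  # 0.95 is an outlier
```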
The focus of this thesis is broadly on the alignment of lexicographical data, in particular dictionaries. To tackle some of the challenges in this field, two main tasks are addressed: word sense alignment and translation inference. Given the sense definitions of a headword in two different monolingual dictionaries, the first task aims to find an optimal alignment between them. This is a challenging task, especially due to differences in sense granularity, coverage, and description across the two resources. After describing the characteristics of various lexical semantic resources, we introduce a benchmark containing datasets in 17 languages, in which word senses and definitions are manually annotated across different resources by experts. In the process of creating the benchmark, the knowledge of lexicographers is incorporated through the annotations, in which a semantic relation, namely exact, narrower, broader, related, or none, is selected for each sense pair. This benchmark can be used to evaluate word sense alignment systems. The performance of several alignment techniques based on textual and non-textual semantic similarity detection and semantic relation induction is evaluated using the benchmark. Finally, we extend this work to translation inference, in which translation pairs are induced to generate bilingual dictionaries, in an unsupervised way, using various graph-analysis-based approaches. This task is of particular interest for creating lexicographical resources for less-resourced and under-represented languages, and can also help to increase the coverage of existing resources. From a practical point of view, the techniques and methods developed in this thesis are implemented in tools that can facilitate the alignment task.
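A minimal sketch of one baseline family evaluated against such a benchmark, aligning senses by textual similarity of their definitions (illustrative only; the definitions and threshold below are made up):

```python
# Greedy word-sense alignment across two dictionaries by TF-IDF cosine
# similarity of the definitions. Predicting the relation label (exact,
# narrower, broader, related, none) would need a classifier on top.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

defs_a = ["a large natural stream of water", "to confuse or perplex someone"]
defs_b = ["a wide flow of water moving to the sea", "a financial institution"]

vec = TfidfVectorizer().fit(defs_a + defs_b)
sim = cosine_similarity(vec.transform(defs_a), vec.transform(defs_b))

for i, row in enumerate(sim):
    j = row.argmax()
    if row[j] > 0.2:  # keep only sufficiently similar sense pairs
        print(f"A[{i}] <-> B[{j}] (score={row[j]:.2f})")
```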